
Top Artificial Intelligence Concerns For Family Offices

R Kris Coleman

10 December 2024

The following article addresses how family offices should think about the potential risks generated by AI. Its author, R Kris Coleman, has the kind of background that suits a discussion of risk and security: he is a 35-year veteran of the security industry, having served as a CIA officer and a special agent in the Federal Bureau of Investigation, and he is the founder and CEO of Red5, a security and intelligence firm that has been serving family offices and corporations for more than 20 years.

The editors are pleased to share the insights of this expert and hope that they stimulate discussion. Email tom.burroughes@wealthbriefing.com and amanda.cheesley@clearviewpublishing.com if you wish to comment. The usual editorial disclaimers apply to views of guest writers.

Artificial intelligence is becoming the next great inflection point in human society. Its impact on our lives will be equivalent to that of clean water, the wheel and transport, and the mass availability of electricity. In speaking with various cyber and AI experts across fintech, crypto, and government, it’s clear that family offices need to begin assessing how AI could affect their family plans, communications and processes, their financial strategy, their compliance and governance, and the protection of their most valued asset – the family itself.

We need humans in this cycle to manage and control risks. The European Union’s AI Act, for instance, speaks specifically to the requirement to keep humans in the loop. A chief information security officer, who opted to remain anonymous due to their role with affluent families and crypto companies, recently put it well: “As a society we tend to go on ‘autopilot’ with technology; but AI is different. It’s different because it’s generated by human judgment. We, AI’s human makers, are flawed by our judgments and those have been passed down to the AIs of today.”

Nothing about AI can be taken for granted: the technology is not to be fully trusted, and you have to vet all aspects of interaction, data policies, and data movement and storage. Kiersten Todt, former chief of staff of CISA, the Cybersecurity and Infrastructure Security Agency, shared that it is imperative to think of AI as a definitive “crawl, walk, run” scenario – find well-informed ways to implement efficiencies smartly, and be deliberate as to when you choose to take the next step with AI.

The models most commonly released to the public or in commercial use are called large language models, or LLMs. As defined by Gartner, LLMs are a specialized category of AI trained on vast amounts of text to understand existing content, which they use to generate original content, typically in response to queries. LLMs are expected to gain favor in commercial operations, processing support for automated services, and content creation.
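For readers who have not interacted with an LLM programmatically, here is a minimal sketch of what “generating original content in response to queries” looks like in practice. It assumes the openai Python package (version 1 or later) and an API key in the environment; the model name is illustrative.

```python
# Minimal sketch of querying an LLM, assuming the `openai` Python package (v1+)
# and an OPENAI_API_KEY set in the environment. The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model
    messages=[{
        "role": "user",
        "content": "Summarize the duties of a family office in two sentences.",
    }],
)
print(response.choices[0].message.content)
```

Note that the prompt in this sketch leaves your premises and is processed by a third party – precisely the data privacy concern discussed below.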

As family offices seek to leverage LLMs, and AI in general, they will have to navigate the societal impacts caused by AI and its human masters. They should take into consideration the various areas of risk caused or amplified by AI. That its use within our social, business, and technological circles will change our lives is beyond much debate; how we allow it to do so – for the moment – remains in our control. While there are numerous concerns, threats, and risks out there, family offices today should focus on the following areas of concern:

1. Ethics – Based on inputs from independent cyber experts, over-reliance on AI is one of the biggest risks today. It is clear that AIs are capable, like their human creators, of bias and discrimination. Though this stems mostly from biased algorithms and limited training data, the decisions AIs render must be questioned and checked for accuracy and hallucinations. Family offices should not rely solely on AI for decision-making, strategic planning, document preparation and management, or intimate, important family matters.

Practice and experience have shown us that incorrect results commonly sit at the top of AI-generated search results.

Now and in the future, AI may also be blamed for the weakening of ethics and the lessening of goodwill in our society. While humans are quite capable of doing this on their own, AI can amplify and accelerate the degradation of academic and legal integrity.

Models distributed widely as “uncensored” LLMs are already being weaponized for unethical purposes.

Cost is not a limiting factor for criminals and other adversaries, as running an uncensored LLM is as cheap as a commoditized laptop. Without regulation, AI will soon have an even greater ability to affect the activities of world leaders, and even more so the electorates that often put them in power. Instilling moral and ethical values into AI programming and learning is imperative if we are to allow AIs to integrate deeply into society; indeed, the CISOs I spoke with for this article agreed.

Ethics and goodwill apply to much more than geopolitics; they also drive many aspects of our financial markets, which are the cornerstone of our economy. Much will need to be done here, too, to manage a variety of risks.

2. Data privacy – AIs ingest data so that they can learn, index, analyze, and produce. However, few protections are regularly applied to the data ingested, and when those protections are omitted or ignored, the consequences for your data can be unintended. The data could contain personally identifiable information such as dates of birth, Social Security numbers, or biometric data. For your companies and investments, the exposure could involve intellectual property or sensitive family office information around investments, for instance. Many families are using AI for grant applications, and some are considering outsourced HR providers that incorporate AI into their employment processes. All of these, Todt explained, involve external third parties’ use of AI; and, here again, family offices need to move more slowly. Emphasis on process here is mandatory.
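One minimal illustration of such a protection is scrubbing obvious identifiers from text before it is submitted to any external model. The Python sketch below is illustrative only: the patterns and the redact_pii helper are assumptions, and real PII detection requires far more than regular expressions.

```python
import re

# Illustrative US-format patterns only; real PII detection needs much more.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # Social Security numbers
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),          # MM/DD/YYYY dates
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def redact_pii(text: str) -> str:
    """Replace recognizable identifiers before text leaves the family office."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

note = "John Smith, SSN 123-45-6789, born 01/02/1960, john@example.com"
print(redact_pii(note))
# -> John Smith, SSN [REDACTED SSN], born [REDACTED DOB], [REDACTED EMAIL]
```

Even a simple pre-submission scrub of this kind enforces the process discipline described above: data is inspected before it leaves the office, not after.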

Lastly, the ingested data could include information around the sensitive activities of the family – your children, your parents. All of this further erodes control of your data, and allows others to know more about you and the family office, your investment plans, and your movements. Why does this matter?

3. Security – For those intent on causing harm or stealing information or assets, your location, PII, and activities are critical to the ongoing success of their criminal enterprise. They sell the data, they use the data to plan theft and extortion, and in some cases they leverage the information to locate family members and cause them harm in the physical world.

Surveillance – the global integration of sensors and databases makes it possible to always know where you are and what you are doing. Even as families work to sustain and claw back some of the privacy they have lost to notoriety or affluence, the world gets smaller: tracking of movements across borders and across town, biodata from technology wearables, health data that can inform insurance companies and your doctor, political beliefs and contributions, marketing information for hyper-focused advertising, and collection efforts by governments, their surrogates, and criminal enterprises have all multiplied to dangerous levels.

Disinformation – AI is now being leveraged to push dis- and misinformation at mass scale, high speed, and precision to influence decisions made by members of society. This is particularly important in any election year, as well as in moments when we rely heavily on fact-based information and news media, such as a geopolitical crisis or natural disaster. In these volatile times it is imperative that AI provide reliable information from which quality decisions can be made, decisions grounded in civil, ethical context.

At the cutting edge of disinformation and fraud are deepfakes – both audio and video – that replicate the actual visual and audio likenesses of family members and family office leadership. Criminal enterprises and nation-states are using such likenesses to fraudulently authorize large electronic transfers of funds and so facilitate theft.

Fake job candidates are being created with deepfakes, vetted by humans, and in some cases given email access to corporate systems before being detected. Deepfake pornography is also rising, with both celebrities and ordinary people falsely portrayed as naked and involved in salacious acts. The era of deepfake fraud has arrived, and with it comes the real possibility that family offices will be targeted for similar or more nefarious adversarial acts.

4. Financial – As AIs don’t necessarily take context into account in their investment calculations, it is imperative to limit their integration into investment systems.

For financial institutions, especially family offices, investment decisions are deliberate, calculated, and strategic in nature. If those decisions are influenced or otherwise adversely impacted by AI-driven investment tactics – bearing some of the aforementioned aspects of dis- or misinformation – there could be a negative effect on family office financial decisions, operations, and risk. Financial record-keeping and audits, for example, rely on the veracity of their information, and the integrity of the financial system requires accuracy and transparency.

5. Lack of transparency – AI doesn’t show its work, or how it comes to certain conclusions. In a world of accountability within a family office or charitable foundation, transparency in the books and decisions is paramount. Consumer-grade AI has not yet evolved to a level of sophistication that regularly cites sources for its information. In a world based on legal parameters, regulation, and governance, AI remains a bit of a wild west.

For now the industry is building upon the inputs and feedback of early adopters to discover the business opportunities related to AI. It is not yet focused, as it will eventually need to be, on transparency in its decisions and accountability for its work product. Many have suggested establishing a technology committee, as distinct from an IT group, to oversee and monitor AI and its application – doable in an institutional setting, but costly for many a family office.

6. Socioeconomic inequality – As many family offices have a focus on socioeconomic improvement, diversity, sustainability, and philanthropy, it is important to note that AI may be leveraged in a manner that runs counter to family office charters and family governance. Some of these anticipated concerns include job displacement, a statistical approach to civil and criminal justice, and disproportionate benefits flowing to affluent individuals and corporations – those with the ability to buy the service or benefit from its outputs.

Conversely, lower socioeconomic groups may be disenfranchised by job loss as AI and robotics replace humans in the labor market. We will need to address this in both AI itself and in our society to avoid mass upheaval in our labor and economic markets.

7. Uncontrollable, self-aware AI – Hackers recently exploited a vulnerability that, for a brief period, allowed users of OpenAI's ChatGPT web service to submit any prompt absent the built-in ethical controls.

This “God-mode version,” as it was described in various articles, promptly began promoting the creation of illegal drugs and explosive devices. While it was quickly noticed and shut down by OpenAI, such vulnerabilities do and will continue to exist – a great example of the temporary weaponization of the world's most powerful public language model. This is a constant and ongoing threat as adversaries work to exploit open LLMs for malicious use.

Absent guardrails, AI can assuredly be expected to drive chaotic behavior, much of which could be detrimental to humankind. What was once the realm of science fiction has become common discussion among scientists and ethicists: AI could become self-aware and, in doing so – perhaps in search of something deep in its own opaque learning – become hostile to humans. While AI can also do great things in the world of research, analysis, and medical diagnoses, we have to manage it in a way that progresses ethically, controllably, and for the greater good.

Red5 Security’s advisory board member, Laurent Roux, is an executive and consultant from the family office, trust, and wealth industry. Laurent stressed that peer communication is key to interaction and benchmarking among family offices, and that extensive reliance on AI could significantly erode that peer exchange.

Family members and FO CEOs should increase their sharing of issues, concerns, and attributes of AI so that the approach is transparent and supported by healthy communication. We should remember that AI is moving faster than the regulators, and there is an increasing need to include AI management in a firm’s policies and procedures. FO governance will be impacted.

Human involvement is critical in ensuring that a firm’s and a family’s ethics and values are well represented. A number of questions should be addressed, including: How will this be managed and communicated to staff? What is the impact of AI on the FO culture?

Acceptance by staff and family is important, and FOs must work to ensure that human interaction with AI is productive. Many firms are hiring AI-knowledgeable staff and retraining others; however, the costs have yet to be fully understood. It remains to be seen to what extent AI is actually needed in the FO, an entity and environment dedicated to human relationships.

There are other concerns around AI that are less of an emerging issue for FOs but still concerning for society, such as the loss of human influence and creativity. The strategic and practical impacts are real and require critical thinking, and peer conversations could go a long way toward achieving integrated, collaborative, cost-efficient thinking.

So, with all the potential for negativity – what should FOs do? Consider the following:

-- Pause the use and integration of AI in family office operations until family office leadership fully understands the impacts. With guardrails in place, AI can be an amazing resource for family offices – a safe protocol for adoption is the best approach;

-- Prohibit the input of family information, whether personal or financial, into an AI or an AI-connected model or system, until the consequences of doing so are known; 

-- Get smart on AI, its potential uses and effects. Understand how it applies to your family, your investment portfolio, and your family legacy plan. Kiersten Todt put it clearly: “AI literacy is critical to the success of any family being involved with AI”;

-- Public AI models are being open-sourced, and private AI instances are now a reality. Consider bringing in experts so that your organization can benefit from the force multiplier that is AI productivity, without the risk of data going off premises or being shared with a third party (see the sketch after this list);

-- Create a plan for monitoring AI that is adjacent to your family office operations; 

-- Where is it being used? Do some of your vendors or investors use it? For what purpose is it being used relative to your family’s situation? Work to understand the impacts of those adjacent AI operations; and 

-- From his experiences with family offices, Laurent Roux added, it’s important for FOs to “manage employee buy-in on the use of AI so the employee’s behavior is consistent with family/FO culture, and how they use it.”
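On the private-instance point above, here is a minimal sketch, assuming a locally hosted model exposed through an OpenAI-compatible HTTP endpoint (tooling such as llama.cpp’s server or Ollama commonly provides one); the URL, port, and model name are illustrative assumptions.

```python
# Minimal sketch of querying a private, on-premise AI instance, assuming a
# locally hosted model behind an OpenAI-compatible HTTP endpoint (e.g., via
# llama.cpp's server or Ollama). URL, port, and model name are illustrative.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # local inference server
    json={
        "model": "local-model",  # whatever model the local server is serving
        "messages": [{
            "role": "user",
            "content": "Draft an agenda for the quarterly family meeting.",
        }],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint is local, the prompt and response never leave the family office’s own hardware – the property that makes private instances attractive.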

At a minimum, all family offices should take these five practical actions:

1. Understand AI and put in place a method for active learning on the topic, and include multiple generations from your family office in that process. Shared learning across generations in family offices during this technological upheaval will be critical; 

2. Much like family offices have employed investment systems to further their financial gain and limit their financial risk – apply a similar approach to the adoption and use of AI in and around family office operations. Create a listing of opportunities and threats relative to the use of AI in the FO; 

3. Do everything AI in moderation – once you start using it, there is no going back; data shared with an AI cannot be recovered, for instance. You will not be less competitive if you move slowly on AI. Emphasize the need for human interaction in AI-driven tasks, consistent with family values and ethics;

4. To that end, employing third-party due diligence is key. AI is a force multiplier for operations. Family offices need to perform adequate due diligence to know where the data is going and whether the results returned are accurate; and 

5. Limit the potential negative impacts on your family office by working through the above steps and analyzing the potential risks. Use of AI should be discussed and included in any FO governance conversations.

Stay positive, embrace a growth mindset about AI, but move smartly – this feeling was shared among the experts with whom I collaborated. AI can be an amazing enhancement to human society; we just have to be good stewards of this new technology so that it works for us, not the other way around.